
    Compression of Deep Neural Networks on the Fly

    Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve these results, they rely on millions of trainable parameters. However, when targeting embedded applications, the size of these models becomes problematic, making their use on smartphones and other resource-limited devices impractical. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists of adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.
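
    As a rough illustration of the Product Quantization step mentioned above (this is not the paper's regularizer; the layer shape, 4 subspaces, and 256 centroids are assumptions made for the sketch), the trained weights of a fully-connected layer could be compressed as follows:

        # Illustrative product quantization (PQ) of a trained weight matrix.
        import numpy as np
        from sklearn.cluster import KMeans

        def pq_compress(W, n_subspaces=4, n_centroids=256):
            """Split each row of W into sub-vectors; k-means-quantize each subspace."""
            d = W.shape[1]
            assert d % n_subspaces == 0
            sub_d = d // n_subspaces
            codebooks, codes = [], []
            for s in range(n_subspaces):
                block = W[:, s * sub_d:(s + 1) * sub_d]
                km = KMeans(n_clusters=n_centroids, n_init=4).fit(block)
                codebooks.append(km.cluster_centers_)      # n_centroids x sub_d floats
                codes.append(km.labels_.astype(np.uint8))  # one byte per sub-vector
            return codebooks, codes

        def pq_decompress(codebooks, codes):
            # Reassemble an approximation of W from centroids and codes.
            return np.hstack([cb[c] for cb, c in zip(codebooks, codes)])

        W = np.random.randn(4096, 1024).astype(np.float32)  # stand-in FC layer
        codebooks, codes = pq_compress(W)
        W_hat = pq_decompress(codebooks, codes)

    Storage drops from 256 floats per sub-vector to a single byte plus the shared codebooks, which is where the headline compression rates come from.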

    Cross-dimensional Weighting for Aggregated Deep Convolutional Features

    We propose a simple way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. We then propose specific non-parametric schemes for both spatial and channel-wise weighting that boost the effect of highly active spatial responses while regulating burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state of the art for approaches based on pre-trained networks. We also provide an easy-to-use, open-source implementation that reproduces our results. Comment: Accepted for publication at the 4th Workshop on Web-scale Vision and Social Media (VSM), ECCV 2016.
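
    A minimal sketch of this kind of non-parametric cross-dimensional weighting, with spatial weights derived from aggregate activation and channel weights damping bursty channels; the normalisations below are assumptions, and the paper's exact formulas may differ:

        # Assumes a ReLU'd conv-layer output, i.e. non-negative activations.
        import numpy as np

        def weighted_aggregate(fmap):
            """fmap: C x H x W feature maps -> single C-dim image descriptor."""
            # Spatial weights: boost highly active locations.
            S = fmap.sum(axis=0)                        # H x W activation map
            S = np.sqrt(S / (np.linalg.norm(S) + 1e-12))
            # Channel weights: damp "bursty" channels that fire everywhere.
            frac_active = (fmap > 0).mean(axis=(1, 2))  # per-channel density
            Cw = np.log(1.0 / (frac_active + 1e-6))
            # Weight both dimensions, sum-pool spatially, L2-normalise.
            desc = (fmap * S[None, :, :]).sum(axis=(1, 2)) * Cw
            return desc / (np.linalg.norm(desc) + 1e-12)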

    Efficient On-the-fly Category Retrieval using ConvNets and GPUs

    We investigate the gains in precision and speed that can be obtained by using Convolutional Networks (ConvNets) for on-the-fly retrieval, where classifiers are learnt at run time for a textual query from downloaded images and used to rank large image or video datasets. We make three contributions: (i) we present an evaluation of state-of-the-art image representations for object category retrieval over standard benchmark datasets containing 1M+ images; (ii) we show that ConvNets can be used to obtain features that are highly performant yet much lower-dimensional than previous state-of-the-art image representations, and that their dimensionality can be reduced further, without loss in performance, by compression using product quantization or binarization. Consequently, features with state-of-the-art performance on large-scale datasets of millions of images can fit in the memory of even a commodity GPU card; (iii) we show that an SVM classifier can be learnt within a ConvNet framework on a GPU in parallel with downloading the new training images, allowing for continuous refinement of the model as more images become available, and simultaneous training and ranking. The outcome is an on-the-fly system that significantly outperforms its predecessors in terms of precision of retrieval, memory requirements, and speed, facilitating accurate on-the-fly learning and ranking in under a second on a single GPU. Comment: Published in the proceedings of ACCV 2014.
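
    The on-the-fly ranking idea of contribution (iii) can be sketched as follows, with feature extraction and GPU specifics abstracted away; the corpus size, feature dimension, and negative-pool size are assumptions made for illustration:

        # Learn a linear SVM on ConvNet features of freshly downloaded query
        # images against a fixed negative pool, then rank the corpus by score.
        import numpy as np
        from sklearn.svm import LinearSVC

        corpus = np.random.randn(100_000, 128).astype(np.float32)  # precomputed features
        negatives = corpus[np.random.choice(len(corpus), 2000, replace=False)]

        def rank_for_query(positive_feats):
            X = np.vstack([positive_feats, negatives])
            y = np.r_[np.ones(len(positive_feats)), -np.ones(len(negatives))]
            svm = LinearSVC(C=1.0).fit(X, y)
            scores = corpus @ svm.coef_.ravel() + svm.intercept_
            return np.argsort(-scores)   # best-matching images first

        ranking = rank_for_query(corpus[:5])  # pretend the first 5 are downloaded positives

    Because scoring is a single matrix-vector product over compact features, re-ranking the whole corpus as new positives arrive stays cheap.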

    PlaNet - Photo Geolocation with Convolutional Neural Networks

    Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate, and occasionally an exact, location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the Earth into thousands of multi-scale geographic cells and training a deep network on millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, this model achieves a 50% performance improvement over the single-image model.
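
    The classification formulation can be sketched with a fixed latitude/longitude grid standing in for PlaNet's adaptive multi-scale cells (a deliberate simplification; the grid resolution below is an assumption):

        # Every photo is labelled with the cell containing its geotag, a deep
        # net predicts a distribution over cells, and the predicted location
        # is the centre of the argmax cell.
        import numpy as np

        CELLS_LAT, CELLS_LNG = 90, 180   # assumed grid resolution (2 deg cells)

        def cell_of(lat, lng):
            i = int((lat + 90.0) / 180.0 * CELLS_LAT)
            j = int((lng + 180.0) / 360.0 * CELLS_LNG)
            return min(i, CELLS_LAT - 1) * CELLS_LNG + min(j, CELLS_LNG - 1)

        def center_of(cell):
            i, j = divmod(cell, CELLS_LNG)
            return ((i + 0.5) * 180.0 / CELLS_LAT - 90.0,
                    (j + 0.5) * 360.0 / CELLS_LNG - 180.0)

        # Stand-in for a network's softmax over cells -> predicted coordinates.
        probs = np.random.dirichlet(np.ones(CELLS_LAT * CELLS_LNG))
        print(center_of(int(probs.argmax())))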

    Visual Link Retrieval in a Database of Paintings

    This paper examines how far state-of-the-art machine vision algorithms can go in retrieving common visual patterns shared by series of paintings. The search for such visual patterns, central to art history research, is challenging because of the diversity of similarity criteria that could relevantly demonstrate genealogical links. We design a methodology and a tool to efficiently annotate clusters of similar paintings, and test various algorithms in a retrieval task. We show that pretrained convolutional neural networks perform better on this task than other machine vision methods aimed at photograph analysis. We also show that retrieval performance can be significantly improved by fine-tuning a network specifically for this task.
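
    A minimal version of such a retrieval pipeline, assuming an off-the-shelf torchvision ResNet-50 as the pretrained network (the authors' exact model and feature layer are not specified here):

        # Embed every painting with a pretrained CNN, then propose visual
        # links by nearest-neighbour search on L2-normalised features.
        import torch
        import torchvision.models as models
        import torchvision.transforms as T
        from PIL import Image

        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        model.fc = torch.nn.Identity()   # expose the 2048-d pooled features
        model.eval()

        prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                          T.Normalize([0.485, 0.456, 0.406],
                                      [0.229, 0.224, 0.225])])

        @torch.no_grad()
        def embed(path):
            x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
            f = model(x).squeeze(0)
            return f / f.norm()

        # Cosine similarity between two paintings as a candidate-link score:
        # score = embed("painting_a.jpg") @ embed("painting_b.jpg")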

    Low-Resolution Face Recognition

    Whilst recent face-recognition (FR) techniques have made significant progress on recognising constrained high-resolution web images, the same cannot be said of natively unconstrained low-resolution images at large scales. In this work, we examine this under-studied FR problem systematically and introduce a novel Complement Super-Resolution and Identity (CSRI) joint deep learning method with a unified end-to-end network architecture. We further construct TinyFace, a new large-scale dataset of native unconstrained low-resolution face images drawn from selected public datasets, because no benchmark of this nature exists in the literature. With extensive experiments we show that there is a significant gap between the FR performances reported on popular benchmarks and the results on TinyFace, and demonstrate the advantages of the proposed CSRI over a variety of state-of-the-art FR and super-resolution deep models on this largely ignored FR scenario. The TinyFace dataset is released publicly at: https://qmul-tinyface.github.io/. Comment: Accepted by the 14th Asian Conference on Computer Vision (ACCV 2018).
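
    A heavily simplified sketch of the joint super-resolution-plus-identity idea only; the module internals, losses, and weighting below are assumptions, not CSRI's published design:

        # A super-resolution branch is trained with a pixel loss plus an
        # identity loss from a recognition branch, so upsampling is steered
        # toward identity-preserving detail.
        import torch
        import torch.nn as nn

        class JointSRID(nn.Module):
            def __init__(self, sr_net: nn.Module, id_net: nn.Module):
                super().__init__()
                self.sr, self.id = sr_net, id_net

            def losses(self, lr, hr, labels, lam=0.1):
                sr = self.sr(lr)                           # upsampled face
                pixel = nn.functional.l1_loss(sr, hr)      # reconstruction term
                ident = nn.functional.cross_entropy(self.id(sr), labels)
                return pixel + lam * ident                 # assumed weighting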

    Observation of a J^PC = 1-+ exotic resonance in diffractive dissociation of 190 GeV/c pi- into pi- pi- pi+

    The COMPASS experiment at the CERN SPS has studied the diffractive dissociation of negative pions into the pi- pi- pi+ final state using a 190 GeV/c pion beam hitting a lead target. A partial-wave analysis has been performed on a sample of 420,000 events taken at values of the squared 4-momentum transfer t' between 0.1 and 1 GeV^2/c^2. The well-known resonances a1(1260), a2(1320), and pi2(1670) are clearly observed. In addition, the data show significant natural-parity-exchange production of a resonance with spin-exotic quantum numbers J^PC = 1-+ at 1.66 GeV/c^2 decaying to rho pi. The resonant nature of this wave is evident from the mass-dependent phase differences with respect to the J^PC = 2-+ and 1++ waves. From a mass-dependent fit, a resonance mass of 1660 +- 10 (stat) +0/-64 (syst) MeV/c^2 and a width of 269 +- 21 (stat) +42/-64 (syst) MeV/c^2 are deduced. Comment: 7 pages, 3 figures; version 2 gives some more details, data unchanged; version 3 updated authors, text shortened, data unchanged.
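
    Mass-dependent fits of this kind are typically based on a relativistic Breit-Wigner amplitude; a generic form is shown below (the paper's exact parameterisation of the dynamic width \Gamma(m) is not reproduced here):

        % Relativistic Breit-Wigner amplitude (generic form)
        BW(m) = \frac{m_0\,\Gamma_0}{m_0^2 - m^2 - i\,m_0\,\Gamma(m)}

    The quoted resonance mass and width correspond to the fitted m_0 and \Gamma_0 = \Gamma(m_0).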

    First Measurement of Chiral Dynamics in \pi^- \gamma -> \pi^- \pi^- \pi^+

    The COMPASS collaboration at CERN has investigated the \pi^- \gamma -> \pi^- \pi^- \pi^+ reaction at center-of-momentum energies below five pion masses, sqrt(s) < 5 m(\pi), embedded in the Primakoff reaction of 190 GeV pions impinging on a lead target. The exchange of quasi-real photons is selected by isolating the sharp Coulomb peak observed at the smallest momentum transfers, t' < 0.001 (GeV/c)^2. Using partial-wave analysis techniques, the scattering intensity of Coulomb production, described in terms of chiral dynamics, and its dependence on the 3\pi invariant mass m(3\pi) = sqrt(s) were extracted. The absolute cross section was determined in seven bins of sqrt(s) with an overall precision of 20%. At leading order, the result is found to be in good agreement with the prediction of chiral perturbation theory over the whole energy range investigated. Comment: 10 pages, 5 figures.